Online content moderation: Can AI help clean up social media?

#artificialintelligence

Dec 20 (Thomson Reuters Foundation) - Two days after it was sued by Rohingya refugees from Myanmar over allegations that it did not take action against hate speech, social media company Meta, formerly known as Facebook, announced a new artificial intelligence system to tackle harmful content. Machine learning tools have increasingly become the go-to solution for tech firms to police their platforms, but questions have been raised about their accuracy and their potential threat to freedom of speech. WHY ARE SOCIAL MEDIA FIRMS UNDER FIRE OVER CONTENT MODERATION? The $150 billion Rohingya class-action lawsuit filed this month came at the end of a tumultuous period for social media giants, which have been criticised for failing to effectively tackle hate speech online and for fuelling polarisation. The complaint argues that calls for violence shared on Facebook contributed to real-world violence against the Rohingya community, which suffered a military crackdown in 2017 that refugees said included mass killings and rape.
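The article does not describe how Meta's new system works, but the machine learning tools it refers to are typically text classifiers trained on labelled examples of harmful and acceptable posts. As a rough illustration only (not Meta's system), here is a minimal sketch using scikit-learn; the toy corpus, labels, and flagging threshold are hypothetical.

```python
# Minimal illustration of an ML content-moderation classifier.
# NOTE: the data and the 0.5 threshold are hypothetical; real systems use
# far larger labelled corpora, multilingual models, and human review.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labelled corpus: 1 = harmful, 0 = acceptable (illustrative only).
posts = [
    "we should drive them out of the country",
    "lovely weather at the lake today",
    "they deserve to be attacked",
    "congrats on the new job!",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(posts, labels)

# Score a new post; anything above the threshold is queued for review.
new_post = "people like that should be attacked"
score = model.predict_proba([new_post])[0][1]
if score > 0.5:  # threshold chosen arbitrarily for illustration
    print(f"flag for human review (score={score:.2f})")
else:
    print(f"allow (score={score:.2f})")
```

In practice, platforms pair such classifiers with human moderators and appeals processes, which is precisely where the accuracy and freedom-of-speech concerns raised in the article arise: an automated score decides what gets removed or escalated.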


Artificial Intelligence and the Future of Online Content Moderation

#artificialintelligence

Yesterday in Berlin, I attended a workshop on the use of artificial intelligence in governing online communication, hosted by the Humboldt Institute for Internet and Society. In the United States, Section 230 of the Communications Decency Act (CDA) provides broad immunity to platforms, with the express goals of promoting economic development and free expression. Daphne Keller has a good summary of the legal landscape on intermediary liability. Platforms now face increasing pressure to detect and remove illegal (and, in some cases, legal-but-objectionable) content. For example, bills in the U.S. House and Senate would remove safe-harbor protection from platforms that do not remove illegal content related to sex trafficking.